
    A Kind of Wireless Sensor Network Coverage Optimization Algorithm Based on Genetic PSO

    Addressing the slow convergence and premature convergence of existing network coverage optimization based on the standard particle swarm algorithm, this paper proposes a wireless sensor network coverage optimization algorithm based on genetic particle swarm optimization (PSO). The maximum coverage of the wireless sensor network is taken as the objective function; a genetic algorithm with adaptive crossover and mutation factors searches the solution space, while the strong global search ability of PSO widens the search scope so that particles cover the area more efficiently. Together these strengthen the algorithm's optimization ability, improve node coverage, and mitigate premature convergence. Compared with the standard genetic algorithm and a newer quantum genetic algorithm, simulation results show that the coverage rate of this algorithm increases by 2.28% and 0.65%, respectively, and convergence speed is also improved; the method can therefore effectively optimize wireless sensor network coverage.
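    The hybrid scheme the abstract describes can be sketched roughly as follows. All parameters here (region size, sensing radius, swarm size, and the adaptive mutation rule) are illustrative assumptions, not values from the paper; each particle encodes a whole sensor deployment, a standard PSO update is followed by a genetic crossover/mutation step, and coverage over a point lattice is the fitness.

```python
import math
import random

random.seed(0)

# Illustrative parameters (assumptions, not the paper's settings).
AREA = 10.0        # side length of the square deployment region
R_SENSE = 1.8      # sensor sensing radius
N_SENSORS = 8      # sensors per candidate deployment
GRID = 12          # coverage is evaluated on a GRID x GRID lattice
SWARM, ITERS = 12, 25
DIM = 2 * N_SENSORS
W, C1, C2 = 0.7, 1.5, 1.5

STEP = AREA / (GRID - 1)
POINTS = [(i * STEP, j * STEP) for i in range(GRID) for j in range(GRID)]

def coverage(deploy):
    """Objective: fraction of lattice points within R_SENSE of any sensor."""
    hit = sum(1 for (px, py) in POINTS
              if any((px - x) ** 2 + (py - y) ** 2 <= R_SENSE ** 2
                     for x, y in deploy))
    return hit / len(POINTS)

def clamp(v):
    return min(AREA, max(0.0, v))

def decode(flat):
    """A particle is a flat list [x1, y1, x2, y2, ...] of sensor coordinates."""
    return list(zip(flat[0::2], flat[1::2]))

particles = [[random.uniform(0, AREA) for _ in range(DIM)] for _ in range(SWARM)]
velocities = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in particles]
pbest_fit = [coverage(decode(p)) for p in particles]
g = max(range(SWARM), key=lambda i: pbest_fit[i])
gbest, gbest_fit = pbest[g][:], pbest_fit[g]
first_fit = gbest_fit

for t in range(ITERS):
    mean_fit = sum(pbest_fit) / SWARM
    for i in range(SWARM):
        p, v = particles[i], velocities[i]
        # Standard PSO velocity/position update.
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            v[d] = (W * v[d] + C1 * r1 * (pbest[i][d] - p[d])
                    + C2 * r2 * (gbest[d] - p[d]))
            p[d] = clamp(p[d] + v[d])
        fit = coverage(decode(p))
        # GA step: one-point crossover with a random mate, then adaptive
        # mutation -- particles whose personal best lags the swarm mean
        # mutate more (a rough analogue of the adaptive factors).
        mate = particles[random.randrange(SWARM)]
        cut = random.randrange(1, DIM)
        child = p[:cut] + mate[cut:]
        pm = 0.15 if pbest_fit[i] < mean_fit else 0.05
        for d in range(DIM):
            if random.random() < pm:
                child[d] = clamp(child[d] + random.gauss(0, 0.5))
        child_fit = coverage(decode(child))
        if child_fit >= fit:
            particles[i], fit = child, child_fit
        if fit > pbest_fit[i]:
            pbest[i], pbest_fit[i] = particles[i][:], fit
            if fit > gbest_fit:
                gbest, gbest_fit = particles[i][:], fit

print(f"coverage: {first_fit:.3f} -> {gbest_fit:.3f}")
```

    The GA step is only accepted when it does not hurt fitness, which is one simple way a hybrid like this avoids the premature stagnation the abstract attributes to plain PSO.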

    Building Damage, Death and Downtime Risk Attenuation in Earthquakes

    Whether for pre-event prevention and preparedness or for post-event response and recovery after a catastrophic earthquake, estimates of damage, death and downtime (3D) losses are needed by engineers, owners, and policy makers. In this research, a quantitative "scenario-based" risk analysis approach was developed to investigate the 3D losses for buildings. The "Redbook Building" is taken as a typical New Zealand construction exemplar and analyzed for the 22 February 2011 Christchurch Earthquake. Losses are presented as attenuation curves that also include the associated uncertainties. The spatial distribution of 3D damage over the height of buildings is also considered. It is thus shown that it is possible to discriminate between losses that lead to building replacement and less severe losses that require structures to be repaired. The 3D loss results show that within Christchurch city (a 17 km radial distance from the earthquake epicenter): (a) the expected physical damage loss ratio is about 50% of the property value; (b) the expected probability that someone is killed or seriously injured is about 4%; and (c) the expected downtime, with the building out of service, is about 24 weeks. However, when various uncertainties are considered, one can have 90% confidence that these loss estimates could be as high as: (a) complete loss (100% physical damage), implying the structure has a high chance of collapse; (b) an 8% probability of fatality, implying deaths and significant injuries are likely; and (c) one year of downtime due to post-event reconstruction demand surge. These results demonstrate that even though structures such as the "Redbook Building" may have been well designed and constructed to contemporary standards, significant damage can still be expected and the downtime loss is particularly large. To address this, new building structures should ideally be built stronger, include recentering attributes, and use Damage Avoidance Design (DAD) armoring connection details.

    LatEval: An Interactive LLMs Evaluation Benchmark with Incomplete Information from Lateral Thinking Puzzles

    With the continuous evolution and refinement of LLMs, they have been endowed with impressive logical reasoning, or vertical thinking, capabilities. But can they think outside the box? Do they possess proficient lateral thinking abilities? Following the setup of lateral thinking puzzles, we propose a novel evaluation benchmark, LatEval, which assesses a model's lateral thinking within an interactive framework. Our benchmark challenges LLMs on two aspects: the quality of the questions the model poses, and the model's ability to integrate information for problem solving. We find that nearly all LLMs struggle to employ lateral thinking during interactions. For example, even the most advanced model, GPT-4, shows some advantage, yet still maintains a noticeable gap compared to humans. This evaluation benchmark provides LLMs with a highly challenging and distinctive task that is crucial to an effective AI assistant.

    English Broadcast News Speech Recognition by Humans and Machines

    With recent advances in deep learning, considerable attention has been given to achieving automatic speech recognition performance close to human performance on tasks like conversational telephone speech (CTS) recognition. In this paper we evaluate the usefulness of these proposed techniques on broadcast news (BN), a similarly challenging task. We also perform a set of recognition measurements to understand how close the achieved automatic speech recognition results are to human performance on this task. On two publicly available BN test sets, DEV04F and RT04, our speech recognition system, using LSTM and residual-network-based acoustic models with a combination of n-gram and neural network language models, achieves word error rates of 6.5% and 5.9%. By reaching new performance milestones on these test sets, our experiments show that techniques developed on related tasks, like CTS, can be transferred to achieve similar performance. In contrast, the best measured human recognition performance on these test sets is much lower, at 3.6% and 2.8% respectively, indicating that there is still room for new techniques and improvements in this space to reach human performance levels.

    Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking

    Entity Linking (EL) is a fundamental task for information extraction and knowledge graphs. The general form of EL (i.e., end-to-end EL) aims first to find mentions in a given input document and then to link those mentions to the corresponding entities in a specific knowledge base. Recently, the retriever-reader paradigm has advanced end-to-end EL, benefiting from the advantages of dense entity retrieval and machine reading comprehension. However, existing work trains the retriever and the reader separately, in a pipeline manner, ignoring the benefit that interaction between the retriever and the reader can bring to the task. To push the retriever-reader paradigm further on end-to-end EL, we propose BEER², a Bidirectional End-to-End training framework for Retriever and Reader. Through our designed bidirectional end-to-end training, BEER² guides the retriever and the reader to learn from each other, make progress together, and ultimately improve EL performance. Extensive experiments on benchmarks from multiple domains demonstrate the effectiveness of our proposed BEER².
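    The retriever-reader paradigm the abstract builds on can be illustrated with a deliberately tiny toy: a retriever ranks knowledge-base entities against the document, then a reader locates a mention span for each candidate. The knowledge base, token-overlap retriever, and substring-matching reader below are all illustrative stand-ins; real systems (including BEER²) use dense bi-encoders and machine-reading-comprehension models.

```python
# Toy retriever-reader entity-linking pipeline (illustrative only).
KB = {
    "Q90": "Paris capital city France Seine",
    "Q142": "France country Europe republic",
    "Q84": "London capital city England Thames",
}

def tokenize(text):
    return text.lower().split()

def retrieve(doc, kb, k=2):
    """Toy retriever: rank entities by token overlap with the document."""
    doc_toks = set(tokenize(doc))
    ranked = sorted(kb, key=lambda e: -len(doc_toks & set(tokenize(kb[e]))))
    return ranked[:k]

def read(doc, entity_desc):
    """Toy reader: return (index, token) of the first document token that
    also appears in the entity description, as the mention span."""
    desc = set(tokenize(entity_desc))
    for i, tok in enumerate(tokenize(doc)):
        if tok in desc:
            return i, tok
    return None

doc = "Paris is the capital of France"
for ent in retrieve(doc, KB):
    print(ent, read(doc, KB[ent]))
```

    In this pipelined toy the two stages never influence each other's scoring, which is exactly the limitation the abstract's bidirectional end-to-end training is meant to remove.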